feat: Model Discovery #75
Conversation
When we talked about model_params in the refinement, I had an idea regarding your static variant: what if you instantiate the models first, and then use them to create a config with proper autocomplete, e.g.:
or, depending on the implementation:
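The original code snippets for this suggestion did not survive the page export. As a hypothetical sketch of the idea (all names here are illustrative, not the SDK's actual API): the model is created as a typed instance first, and the config then receives that instance, so editors can autocomplete its properties instead of accepting an arbitrary string.

```typescript
// Hypothetical sketch of "instantiate the model first, then build the config".
// ChatModel, model(), and the config shape are illustrative stand-ins.
interface ChatModel {
  name: string;
  version: string;
}

function model(name: string, version = 'latest'): ChatModel {
  return { name, version };
}

const gpt4 = model('gpt-4o');

// The config receives a typed instance, so its properties autocomplete.
const config = {
  llm: gpt4,
  templating: { template: [{ role: 'user', content: 'Hello' }] }
};
```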
Overall this looks like a great improvement. Generally, I would like to stick to async/await instead of .then().catch(), and keep function calls shorter by destructuring the APIs, instead of:
WayTooLongApiCallFromSomeApi.theOneFunctionIActuallyNeed() <--- I get Java PTSD flashbacks
const { theOneFunctionIActuallyNeed } = WayTooLongApiCallFromSomeApi <--- the destructured variant
Additionally, if there is a way to avoid the many any types, we should try to narrow them as much as we can.
Other than that, just some general suggestions for the open TODOs 👍
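The two style points above can be sketched side by side. fetchDeployment is a stand-in function, not part of the SDK:

```typescript
// Stand-in async function for demonstration purposes only.
async function fetchDeployment(id: string): Promise<{ id: string; url: string }> {
  return { id, url: `https://example.com/deployments/${id}` };
}

// Promise-chain style the review discourages:
function getUrlThen(id: string): Promise<string> {
  return fetchDeployment(id)
    .then(deployment => deployment.url)
    .catch(err => {
      throw err;
    });
}

// Preferred style: async/await, with destructuring to keep the call short.
async function getUrl(id: string): Promise<string> {
  const { url } = await fetchDeployment(id);
  return url;
}
```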
@@ -28,8 +24,7 @@ describe('GenAiHubClient', () => {
  });

  it('calls chatCompletion with minimum configuration', async () => {
-   const request: GenAiHubCompletionParameters = {
-     deploymentConfiguration,
+   const request = {
Follow-up for making the constants usable with orchestration: https://github.tools.sap/AI/gen-ai-hub-sdk-js-backlog/issues/91
Follow-up for adding convenience for model_params: https://github.tools.sap/AI/gen-ai-hub-sdk-js-backlog/issues/89#issuecomment-7440288
data: OpenAiChatCompletionParameters,
deploymentResolver?: DeploymentResolver,
Follow-up BLI to improve the deploymentResolver: https://github.tools.sap/AI/gen-ai-hub-sdk-js-backlog/issues/92
}
const llm =
  typeof model === 'string' ? { name: model, version: 'latest' } : model;
const deployment = await resolveDeployment({
If resolver is a function, it is currently ignored. See https://github.tools.sap/AI/gen-ai-hub-sdk-js-backlog/issues/92
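A fix along the lines of this comment could invoke a function-typed resolver instead of dropping it. The following is a simplified sketch, with stand-in types rather than the SDK's actual ones:

```typescript
// Simplified stand-ins for the SDK's deployment types.
interface Deployment {
  id: string;
  url: string;
}
type DeploymentResolver = Deployment | (() => Promise<Deployment>);

async function resolveDeployment(resolver?: DeploymentResolver): Promise<Deployment> {
  // The PR currently ignores function resolvers; honoring them means invoking:
  if (typeof resolver === 'function') {
    return resolver();
  }
  // A concrete deployment object is used as-is.
  if (resolver) {
    return resolver;
  }
  // Fallback lookup, stubbed here for the sketch.
  return { id: 'default', url: 'https://example.com/default' };
}
```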
@@ -1,17 +1,10 @@
export * from './client/index.js';
Is this fine?
As long as no checks fail, yes. I think the API check is not in place yet, but once it is, this might cause problems; I'm not sure. I'd suggest fixing it then, if needed.
LGTM, I left a few minor comments and answers to questions you asked.
Unintentionally dismissed Marika's LGTM when I resolved one of her comments 😬.
LGTM
  apiVersion
},
data,
- requestConfig
+ mergeRequestConfig(requestConfig)
[q] Aren't we calling merge twice? Once here and then again in executeRequest?
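The concern can be sketched in isolation. Assuming mergeRequestConfig is a shallow merge of defaults into the caller's config (an assumption; the actual implementation is not shown here), applying it twice is idempotent, so the double call would be redundant work rather than a bug:

```typescript
// Illustrative types and defaults; not the SDK's actual implementation.
interface RequestConfig {
  headers?: Record<string, string>;
  params?: Record<string, string>;
}

const defaults: RequestConfig = {
  headers: { 'content-type': 'application/json' }
};

// Assumed shape: shallow-merge defaults under the caller's config.
function mergeRequestConfig(config: RequestConfig = {}): RequestConfig {
  return { ...defaults, ...config };
}

const once = mergeRequestConfig({ params: { 'api-version': 'v1' } });
const twice = mergeRequestConfig(once);
// once and twice are deep-equal: merging a second time adds nothing new.
```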
  'content-type': 'application/json'
},
params: { 'api-version': apiVersion },
...requestConfig
Here the requestConfig will entirely replace the headers and params keys.
Yes, this is intended. My thinking was that the OpenAI client should only mix in whatever is specific to the OpenAI client.
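The behavior discussed here follows from object spread being shallow: a key set in requestConfig overwrites the entire default object under that key, not just individual entries. A minimal standalone sketch (values are illustrative):

```typescript
// Shallow-spread semantics: nested objects are replaced wholesale, not merged.
const base = {
  headers: { 'content-type': 'application/json' },
  params: { 'api-version': '2024-02-01' }
};

const requestConfig = {
  headers: { authorization: 'Bearer <token>' }
};

const merged = { ...base, ...requestConfig };
// merged.headers is only { authorization: ... } -- 'content-type' is gone,
// while merged.params survives because requestConfig did not set 'params'.
```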
LGTM
Context
Closes AI/gen-ai-hub-sdk-js-backlog#57
API Design
Variant 1:
Variant 1:
Autocompletion is available for name and type.
Variant 2:
Variant 3:
Autocompletion is available for the name.
Decision: We choose alternative 3, as it is the most JS-like style. We can still choose to offer 1 or 2 in addition in the future, if we see a need for it.
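The code blocks for the three variants did not survive the page export. As a purely hypothetical illustration of the chosen string-based style (the type name and model names below are invented, not the SDK's actual model list), variant 3 could rely on a string-literal union so the name still autocompletes:

```typescript
// Hypothetical sketch of variant 3: a string-literal union gives
// autocompletion for the model name while keeping the JS-like string API.
type ChatModelName = 'gpt-4o' | 'gpt-35-turbo' | 'gemini-1.0-pro';

interface ModelConfig {
  // Either a bare name or an object with an optional version.
  model: ChatModelName | { name: ChatModelName; version?: string };
}

const config: ModelConfig = { model: 'gpt-4o' }; // name autocompletes
const pinned: ModelConfig = { model: { name: 'gpt-35-turbo', version: '0613' } };
```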